Maximum entropy based testing in network models: ERGMs and constrained optimization

Ghosh, Subhrosekhar, Karmakar, Rathindra Nath, Lahiry, Samriddha

arXiv.org Machine Learning

Stochastic network models play a central role across a wide range of scientific disciplines, and questions of statistical inference arise naturally in this context. In this paper we investigate goodness-of-fit and two-sample testing procedures for statistical networks based on the principle of maximum entropy (MaxEnt). Our approach formulates a constrained entropy-maximization problem on the space of networks, subject to prescribed structural constraints. The resulting test statistics are defined through the Lagrange multipliers associated with the constrained optimization problem, which, to our knowledge, is novel in the statistical networks literature. We establish consistency in the classical regime where the number of vertices is fixed. We then consider asymptotic regimes in which the graph size grows with the sample size, developing tests for both dense and sparse settings. In the dense case, we analyze exponential random graph models (ERGMs), including the Erdős–Rényi model, while in the sparse regime our theory applies to Erdős–Rényi graphs. Our analysis leverages recent advances in nonlinear large deviation theory for random graphs. We further show that the proposed Lagrange-multiplier framework connects naturally to classical score tests for constrained maximum likelihood estimation. The results provide a unified entropy-based framework for network model assessment across diverse growth regimes.
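To make the Lagrange-multiplier idea concrete, here is a minimal sketch (not the paper's implementation) for the simplest case the abstract mentions, an Erdős–Rényi-type model: maximizing entropy over independent-edge distributions subject to a single expected-edge-count constraint yields Bernoulli(p) edges with p = sigmoid(λ), so the multiplier λ is just the log-odds of the matched edge density. The function name and argument names are illustrative assumptions.

```python
import math

def maxent_edge_multiplier(observed_edges: int, n_vertices: int) -> float:
    """Lagrange multiplier for the MaxEnt edge distribution that matches
    the observed edge count in a simple undirected graph.

    Stationarity of the Lagrangian gives p = 1 / (1 + exp(-lam)) per edge,
    so matching E[#edges] = observed_edges solves to the log-odds below.
    (Illustrative sketch; the paper's constraints are more general.)
    """
    n_pairs = n_vertices * (n_vertices - 1) // 2  # possible edges
    density = observed_edges / n_pairs
    return math.log(density / (1.0 - density))

# Example: 30 edges observed on 10 vertices (45 possible pairs),
# so the matched density is 2/3 and lam = log(2).
lam = maxent_edge_multiplier(observed_edges=30, n_vertices=10)
```

In richer ERGM settings the constraint system has no closed form and the multipliers must be found numerically, but the role of λ as the quantity the test statistic is built from is the same.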



A Broader impact

Neural Information Processing Systems

It is essential to approach the interpretation of our algorithm's results with caution and to subject them to critical evaluation. In this section, we provide the definition of partial ancestral graphs (PAGs). A PAG shares the same adjacencies as any MAG in the observational equivalence class of MAGs; see Section 2. For any v ∈ W, let G be defined as before. In this section, we derive the causal effect for the SMCM in Figure 3 (top), i.e., (6), and prove Theorem 3.1. D.1 Proof of (6): first, using the law of total probability, we have P(y | do(t)) expanded into steps in which one step follows from Rule 3, (c) follows from Rule 1, and (g) follows from Rule 2 of the do-calculus. D.2 Proof of Theorem 3.1. Lemma 1: suppose Assumptions 1 to 3 hold. Given this claim, Theorem 3.1 follows from Tian and Pearl [2002, Theorem 4].